46 research outputs found

    Sports Data Mining Technology Used in Basketball Outcome Prediction

    Driven by increasingly comprehensive sports datasets and the success of data mining techniques in other areas, sports data mining has emerged, enabling us to uncover hidden knowledge that can impact the sports industry. Predicting the outcomes of sporting events has always been challenging and attractive work, and it therefore draws wide research interest. This project focuses on using machine learning algorithms to build models for predicting NBA game outcomes; the algorithms involved are the Simple Logistic classifier, Artificial Neural Networks, SVM, and Naïve Bayes. To obtain a convincing result, data from five regular NBA seasons was collected for model training, and data from one NBA regular season was used as the scoring dataset. After automated data collection and cloud-enabled data management, a data mart containing NBA statistics was built. The machine learning models mentioned above were then trained and tested on data from the data mart. When the scoring dataset was applied to evaluate model accuracy, the Simple Logistic classifier yielded the best result, with an accuracy of 69.67%. The results obtained were compared with those of other methods from different sources; the results of this project are more persuasive because of the vast quantity of data applied, and they can serve as a reference for future work.
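    The abstract names logistic classification as the best-performing model but does not give its implementation details. As a minimal sketch of the idea only, here is a plain logistic classifier trained by stochastic gradient descent on toy game features; the feature names (win-percentage and point-differential gaps between the two teams) are hypothetical illustrations, not the project's actual feature set.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit weights and bias by per-sample gradient descent on log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """1 = home win, 0 = home loss."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical features per game:
# [home win% minus away win%, home avg point diff minus away avg point diff]
X = [[0.3, 4.0], [0.1, 1.5], [-0.2, -3.0], [-0.4, -6.0], [0.25, 2.0], [-0.1, -1.0]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

In the project itself the same fitting would be done over five seasons of data-mart statistics rather than six toy rows.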

    Improving Transformer-based Image Matching by Cascaded Capturing Spatially Informative Keypoints

    Learning robust local image feature matching is a fundamental low-level vision task, which has been widely explored in the past few years. Recently, detector-free local feature matchers based on transformers have shown promising results, largely outperforming purely Convolutional Neural Network (CNN) based ones. However, the correlations produced by transformer-based methods are spatially limited to the centers of the source views' coarse patches, because of the costly attention learning. In this work, we rethink this issue and find that such a matching formulation degrades pose estimation, especially for low-resolution images. We therefore propose a transformer-based cascade matching model -- Cascade feature Matching TRansformer (CasMTR) -- to efficiently learn dense feature correlations, which allows us to choose more reliable matching pairs for relative pose estimation. Instead of re-training a new detector, we use a simple yet effective Non-Maximum Suppression (NMS) post-process to filter keypoints through the confidence map, which largely improves the matching precision. CasMTR achieves state-of-the-art performance in indoor and outdoor pose estimation as well as visual localization. Moreover, thorough ablations show the efficacy of the proposed components and techniques. Comment: Accepted by ICCV2023, Codes will be released in https://github.com/ewrfcas/CasMT
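    The NMS post-process described here filters keypoints by keeping only confidence-map entries that are local maxima above a threshold. The following is a minimal sketch of that generic idea, not the paper's actual code; the window size and threshold are illustrative assumptions.

```python
def nms_keypoints(conf, window=1, threshold=0.5):
    """Keep (row, col) positions whose confidence is a strict local maximum
    within a (2*window+1)^2 neighborhood and at least `threshold`."""
    h, w = len(conf), len(conf[0])
    kept = []
    for r in range(h):
        for c in range(w):
            v = conf[r][c]
            if v < threshold:
                continue
            neighbors = [conf[rr][cc]
                         for rr in range(max(0, r - window), min(h, r + window + 1))
                         for cc in range(max(0, c - window), min(w, c + window + 1))
                         if (rr, cc) != (r, c)]
            if all(v > n for n in neighbors):
                kept.append((r, c))
    return kept

# A toy 4x4 confidence map with two clear peaks and one suppressed near-peak.
conf = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.3, 0.1],
    [0.1, 0.3, 0.2, 0.8],
    [0.0, 0.1, 0.7, 0.4],
]
keypoints = nms_keypoints(conf)
```

The 0.7 entry is discarded because a stronger response (0.8) sits inside its window, which is exactly how NMS trims redundant, lower-confidence keypoints.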

    Learning Prior Feature and Attention Enhanced Image Inpainting

    Many recent inpainting works have achieved impressive results by leveraging Deep Neural Networks (DNNs) to model various kinds of prior information for image restoration. Unfortunately, the performance of these methods is largely limited by the representation ability of vanilla Convolutional Neural Network (CNN) backbones. On the other hand, Vision Transformers (ViT) with self-supervised pre-training have shown great potential for many visual recognition and object detection tasks. A natural question is whether the inpainting task can greatly benefit from the ViT backbone. However, it is nontrivial to directly swap in the new backbone in inpainting networks, as inpainting is an inverse problem fundamentally different from recognition tasks. To this end, this paper incorporates the pre-training based Masked AutoEncoder (MAE) into the inpainting model, which enjoys richer informative priors to enhance the inpainting process. Moreover, we propose to use attention priors from MAE to make the inpainting model learn more long-distance dependencies between masked and unmasked regions. Thorough ablations of the inpainting and self-supervised pre-training models are discussed in this paper. Besides, experiments on both Places2 and FFHQ demonstrate the effectiveness of our proposed model. Codes and pre-trained models are released in https://github.com/ewrfcas/MAE-FAR. Comment: ECCV 202
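    One generic way to inject an attention prior, as described at a high level here, is to bias the raw attention scores with the log of a prior distribution before the softmax; whether the paper uses exactly this form is not stated in the abstract, so the sketch below is an assumption-laden illustration of the idea, not the released MAE-FAR code.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def prior_guided_attention(scores, prior, alpha=1.0):
    """Bias raw attention scores toward positions a pre-trained model's
    attention prior considers informative (alpha scales the prior's influence)."""
    biased = [s + alpha * math.log(p + 1e-8) for s, p in zip(scores, prior)]
    return softmax(biased)
```

With a uniform prior this reduces to ordinary softmax attention; a skewed prior shifts attention mass toward the positions the prior favors, which is how long-distance unmasked regions can be emphasized for a masked query.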

    A Unified Prompt-Guided In-Context Inpainting Framework for Reference-based Image Manipulations

    Recent advancements in Text-to-Image (T2I) generative models have yielded impressive results in generating high-fidelity images from text prompts. However, there is growing interest in exploring the potential of these models for more diverse reference-based image manipulation tasks that require spatial understanding and visual context. Previous approaches have achieved this by incorporating additional control modules or by fine-tuning the generative models specifically for each task until convergence. In this paper, we propose a different perspective. We conjecture that current large-scale T2I generative models already possess the capability to perform these tasks but that it is not fully activated within the standard generation process. To unlock these capabilities, we introduce a unified Prompt-Guided In-Context inpainting (PGIC) framework, which leverages large-scale T2I models to re-formulate and solve reference-guided image manipulations. In the PGIC framework, the reference and the masked target are stitched together as a new input for the generative model, so that filling the masked regions produces the final results. Furthermore, we demonstrate that the self-attention modules in T2I models are well-suited to establishing spatial correlations and efficiently addressing challenging reference-guided manipulations. These large T2I models can be effectively driven by task-specific prompts with minimal training cost, or even with frozen backbones. We evaluate the effectiveness of the proposed PGIC framework across various tasks, including reference-guided image inpainting, faithful inpainting, outpainting, local super-resolution, and novel view synthesis. Our results show that PGIC achieves significantly better performance while requiring less computation compared with other fine-tuning based approaches.
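    The core input construction is stitching the reference and the masked target into one canvas so an inpainting pass over the masked half yields the result. As a minimal sketch of that layout step only (on nested-list "images", side-by-side stitching assumed; the real framework operates on tensors inside the T2I pipeline):

```python
def stitch_for_inpainting(reference, target, mask):
    """Place the reference and the masked target side by side; `None` marks
    the holes the generative model is asked to fill."""
    stitched = []
    for ref_row, tgt_row, mask_row in zip(reference, target, mask):
        masked = [t if m == 0 else None for t, m in zip(tgt_row, mask_row)]
        stitched.append(ref_row + masked)
    return stitched

# Toy 2x2 "images": mask value 1 marks a pixel to be regenerated.
ref = [[1, 2], [3, 4]]
tgt = [[5, 6], [7, 8]]
mask = [[0, 1], [0, 0]]
canvas = stitch_for_inpainting(ref, tgt, mask)
```

Because the reference pixels sit in the same canvas, the model's self-attention can relate masked target positions directly to reference content while filling the holes.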

    Frequency tuning behaviour of terahertz quantum cascade lasers revealed by a laser beating scheme

    In the terahertz frequency range, commercial spectrometers, such as Fourier transform infrared and time-domain spectrometers, offer spectral resolutions between a hundred megahertz and a few gigahertz. The high-precision frequency tuning behaviour of terahertz lasers therefore cannot be revealed by these traditional spectroscopic techniques. In this work, we demonstrate a laser beating experiment to investigate the frequency tuning characteristics of terahertz quantum cascade lasers (QCLs) induced by temperature or drive current. Two terahertz QCLs emitting around 4.2 THz with identical active regions and laser dimensions (150 μm wide and 6 mm long) are employed in the beating experiment. One laser is operated as a frequency comb and the other is driven at a lower current to emit a single frequency. To measure the beating signal, the single-mode laser is used as a fast detector (laser self-detection). The laser beating scheme allows high-precision measurement of the frequency tuning of the single-mode terahertz QCL. The experimental results show that, in the investigated temperature and current ranges, the frequency tuning coefficients of the terahertz QCL are 6.1 MHz/0.1 K (temperature tuning) and 2.7 MHz/mA (current tuning), values that cannot be resolved by a traditional terahertz spectrometer. The laser beating technique shows potential for high-precision linewidth measurements of narrow absorption lines and for multi-channel terahertz communications.
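    The measured quantities reduce to simple arithmetic on the beat note: tuning only the single-mode laser shifts the beat frequency by the quoted coefficients times the applied change. A small sketch using the coefficients from the abstract, assuming linearity over the small tuning ranges investigated:

```python
# Tuning coefficients quoted in the abstract, assumed linear over small ranges.
TEMP_COEFF_MHZ_PER_0P1K = 6.1  # MHz of frequency shift per 0.1 K
CURR_COEFF_MHZ_PER_MA = 2.7    # MHz of frequency shift per mA

def beat_shift_mhz(delta_T_K=0.0, delta_I_mA=0.0):
    """Predicted shift of the beat note (MHz) when only the single-mode
    laser's temperature and/or drive current is changed."""
    return (delta_T_K / 0.1) * TEMP_COEFF_MHZ_PER_0P1K + delta_I_mA * CURR_COEFF_MHZ_PER_MA
```

For example, a 0.5 K temperature step moves the beat note by about 30.5 MHz and a 10 mA current step by about 27 MHz, both well below the ~100 MHz floor of conventional terahertz spectrometers, which is why the beating scheme is needed to see them.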